
    Nonanticipating estimation applied to sequential analysis and changepoint detection

    Suppose a process yields independent observations whose distributions belong to a family parameterized by θ ∈ Θ. When the process is in control, the observations are i.i.d. with a known parameter value θ0. When the process is out of control, the parameter changes. We apply an idea of Robbins and Siegmund [Proc. Sixth Berkeley Symp. Math. Statist. Probab. 4 (1972) 37-41] to construct a class of sequential tests and detection schemes whereby the unknown post-change parameters are estimated. This approach is especially useful in situations where the parametric space is intricate and mixture-type rules are operationally or conceptually difficult to formulate. We exemplify our approach by applying it to the problem of detecting a change in the shape parameter of a Gamma distribution, in both a univariate and a multivariate setting.
    Comment: Published at http://dx.doi.org/10.1214/009053605000000183 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Integrated risk of asymptotically Bayes sequential tests

    For general multiple-decision testing problems, and even two-decision problems involving more than two states of nature, how to construct sequential procedures which are optimal (e.g. minimax, Bayes, or even admissible) is an open question. In the absence of optimality results, many procedures have been proposed for problems in this category. Among these are the procedures studied in Wald and Sobel (1949), Donnelly (1957), Anderson (1960), and Schwarz (1962), all of which are discussed in the introduction of the paper by Kiefer and Sacks (1963) along with investigations in sequential design of experiments (notably those of Chernoff (1959) and Albert (1961)) which can be regarded as considering, inter alia, the (non-design) sequential testing problem. The present investigation concerns certain procedures which are asymptotically Bayes as the cost per observation, c, approaches zero and are definable by a simple rule: continue sampling until the a posteriori risk of stopping is less than Qc (where Q is a fixed positive number), and choose a terminal decision having minimum a posteriori risk. This rule, with Q = 1, was first considered by Schwarz and was shown to be asymptotically Bayes, under mild assumptions, by Kiefer and Sacks (whose results easily extend to the case of arbitrary Q > 0). Given an a priori distribution, F, and cost per observation, c, we shall use δ_F(Qc) to denote the procedure defined by this rule and δ_F*(c) to denote a Bayes solution with respect to F and c. The result of Kiefer and Sacks, for Q = 1, states that r_c(F, δ_F(c)) ~ r_c(F, δ_F*(c)) as c → 0, where r_c(F, δ) is the integrated risk of δ when F is the a priori distribution and c is the cost per observation.
The principal aim of the present work is to construct upper bounds (valid for all c > 0) on the difference r_c(F, δ_F(Qc)) - r_c(F, δ_F*(c)), so that one can determine values of c (or the probabilities of error) small enough to ensure that simple asymptotically optimum procedures are reasonably efficient.

    Node Synchronization for the Viterbi Decoder

    Motivated by the needs of NASA's Voyager 2 mission, in this paper we describe an algorithm which detects and corrects losses of node synchronization in convolutionally encoded data. This algorithm, which would be implemented as a hardware device external to a Viterbi decoder, makes statistical decisions about node synch based on the hard-quantized undecoded data stream. We will show that in a worst-case Voyager environment, our method will detect and correct a true loss of synch (thought to be a very rare event) within several hundred bits; many of the resulting outages will be corrected by the outer Reed-Solomon code. At the same time, the mean time between false alarms is on the order of several years, independent of the signal-to-noise ratio.

    2-SPRT's and the modified Kiefer-Weiss problem of minimizing an expected sample size

    A simple combination of one-sided sequential probability ratio tests, called a 2-SPRT, is shown to approximately minimize the expected sample size at a given point θ0 among all tests with error probabilities controlled at two other points, θ1 and θ2. In the symmetric normal and binomial testing problems, this result applies directly to the Kiefer-Weiss problem of minimizing the maximum over θ of the expected sample size. Extensive computer calculations for the normal case indicate that 2-SPRT's have efficiencies greater than 99% regardless of the size of the error probabilities. Accurate approximations to the error probabilities and expected sample sizes of these tests are given.

    Open-ended tests for Koopman-Darmois families

    The generalized likelihood ratio is used to define a stopping rule for rejecting the null hypothesis θ = θ0 in favor of θ > θ0. Subject to a bound α on the probability of ever stopping in case θ = θ0, the expected sample sizes for θ > θ0 are minimized to within a multiple of log log α^{-1}, the multiple depending on θ. A heuristic bound on the error probability of a likelihood ratio procedure is derived and verified in the case of a normal mean by consideration of a Wiener process. Useful lower bounds on the small-sample efficiency in the normal case are thereby obtained.

    On excess over the boundary

    A random walk {S_n, n ≥ 0}, having positive drift and starting at the origin, is stopped the first time S_n > t ≥ 0. The present paper studies the "excess," S_n − t, when the walk is stopped. The main result is an upper bound on the mean of the excess, uniform in t. Through Wald's equation, this gives an upper bound on the mean stopping time, as well as upper bounds on the average sample numbers of sequential probability ratio tests. The same elementary approach yields simple upper bounds on the moments and tail probabilities of residual and spent waiting times of renewal processes.
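The link between overshoot and mean stopping time via Wald's equation is easy to see numerically. A minimal Monte Carlo sketch, assuming Gaussian increments with drift μ (an illustrative choice of walk, not the paper's general setting):

```python
import random

def overshoot_stats(mu=0.5, t=20.0, reps=2000, seed=1):
    """Monte Carlo sketch: a random walk with drift mu, stopped the first
    time S_n > t.  Reports the mean excess S_N - t and both sides of
    Wald's equation E[S_N] = mu * E[N], estimated from the same sample.
    """
    rng = random.Random(seed)
    tot_excess = tot_n = tot_s = 0.0
    for _ in range(reps):
        s, n = 0.0, 0
        while s <= t:
            s += rng.gauss(mu, 1.0)
            n += 1
        tot_excess += s - t
        tot_n += n
        tot_s += s
    mean_excess = tot_excess / reps
    mean_sn = tot_s / reps
    wald_rhs = mu * (tot_n / reps)
    return mean_excess, mean_sn, wald_rhs
```

Since E[S_N] = t + E[excess], a uniform bound on the mean excess immediately bounds E[N] by (t + bound)/μ, which is how the paper controls average sample numbers of SPRTs.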

    Likelihood ratio tests for sequential k-decision problems

    Sequential tests of separated hypotheses concerning the parameter θ of a Koopman-Darmois family are studied from the point of view of minimizing expected sample sizes pointwise in θ subject to error probability bounds. Sequential versions of the (generalized) likelihood ratio test are shown to exceed the minimum expected sample sizes by at most M log log α^{-1} uniformly in θ, where α is the smallest error probability bound. The proof considers the likelihood ratio tests as ensembles of sequential probability ratio tests and compares them with alternative procedures by constructing alternative ensembles, applying a simple inequality of Wald and a new inequality of similar type. A heuristic approximation is given for the error probabilities of likelihood ratio tests, which provides an upper bound in the case of a normal mean.
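The "ensemble of SPRTs" view above has a simple concrete form when the hypotheses are finitely many. As an illustrative sketch (normal means with unit variance are my assumption; the paper treats general Koopman-Darmois families), stop when one hypothesis's log-likelihood beats every rival's by a margin, which is exactly simultaneously winning all its pairwise SPRTs:

```python
def k_decision_test(xs, means, log_a=4.6):
    """Sketch of a sequential k-decision likelihood ratio test for
    N(mu_i, 1) hypotheses.

    Stop when the leading hypothesis exceeds every other hypothesis's
    log-likelihood by at least log_a; each pairwise margin is a
    sequential probability ratio test, so the rule is an ensemble of
    SPRTs.  log_a ~ log(1/alpha) controls the error probabilities.
    """
    k = len(means)
    ll = [0.0] * k
    for n, x in enumerate(xs, 1):
        for i, m in enumerate(means):
            ll[i] += -0.5 * (x - m) ** 2    # log-likelihood up to a
                                            # common constant, which cancels
        best = max(range(k), key=lambda i: ll[i])
        if all(ll[best] - ll[j] >= log_a for j in range(k) if j != best):
            return (best, n)                # index of chosen hypothesis
    return (None, len(xs))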

    Asymptotic efficiency of three-stage hypothesis tests

    Multi-stage hypothesis tests are studied as competitors of sequential tests. A class of three-stage tests for the one-dimensional exponential family is shown to be asymptotically efficient, whereas two-stage tests are not. Moreover, in order to be asymptotically optimal, three-stage tests must mimic the behavior of sequential tests. Similar results are obtained for the problem of testing two simple hypotheses.

    Nearly-optimal sequential tests for finitely many parameter values

    Combinations of one-sided sequential probability ratio tests (SPRT's) are shown to be "nearly optimal" for problems involving a finite number of possible underlying distributions. Subject to error probability constraints, expected sample sizes (or weighted averages of them) are minimized to within o(1) asymptotically. For sequential decision problems, simple explicit procedures are proposed which "do exactly what a Bayes solution would do" with probability approaching one as the cost per observation, c, goes to zero. Exact computations for a binomial testing problem show that efficiencies of about 97% are obtained in some "small-sample" cases.

    Introduction to statistical inference
